Research Guide: Pruning Techniques for Neural Networks
The authors of this paper propose a network pruning pipeline that allows networks to be pruned from scratch. In experiments compressing classification models on the CIFAR-10 and ImageNet datasets, the pipeline reduces the pre-training overhead incurred by conventional pruning methods while also improving network accuracy. The traditional pruning process involves three stages: pre-training, pruning, and fine-tuning. The technique proposed in this paper instead learns the pruned structure directly from randomly initialized weights, skipping the expensive pre-training stage.
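To make the idea concrete, here is a minimal magnitude-pruning sketch operating on randomly initialized weights. This is a generic illustration of the prune-then-train idea, not the paper's exact channel-pruning method; the function name and sparsity value are assumptions for the example.

```python
import random

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    A generic magnitude-pruning sketch (illustrative, not the
    paper's exact method)."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Prune directly from randomly initialized weights; the surviving
# sparse structure would then be trained from scratch.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(10)]
pruned = magnitude_prune(weights, 0.5)
print(sum(1 for w in pruned if w == 0.0))  # 5 weights zeroed
```

In the traditional pipeline the same criterion would be applied only after full pre-training; pruning from scratch applies it before any training occurs.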
Research Guide: Model Distillation Techniques for Deep Learning
Knowledge distillation is a model compression technique in which a small network (the student) is taught by a larger, already-trained network (the teacher). The student is trained to mimic the behavior of the teacher, which enables deployment on resource-constrained devices such as mobile phones and other edge hardware. In this guide, we'll look at a couple of papers that tackle this challenge. In the first paper, a small model is trained to generalize in the same way as the larger teacher model.
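A minimal sketch of one common form of the distillation objective: the student is trained to match the teacher's temperature-softened output distribution. The logits and temperature below are illustrative; the full loss usually also mixes in the standard cross-entropy on the hard labels.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the
    # teacher's knowledge about relative class similarities.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the teacher's and student's softened outputs."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

teacher = [6.0, 2.0, 1.0]   # hypothetical teacher logits
student = [5.0, 2.5, 0.5]   # hypothetical student logits
print(distillation_loss(student, teacher))
```

The loss is minimized when the student's softened distribution exactly matches the teacher's, which is what drives the student to behave like the larger network.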
Research Guide: Image Quality Assessment for Deep Learning
The authors of this paper compared eight algorithms for blind image quality assessment (IQA). They applied the AutoFolio system, which trains an algorithm selector to choose the best-performing algorithm for a given instance. They also trained a deep neural network to predict the best method: a CNN is trained to classify images according to which IQA method attains the best result, using InceptionResNetV2 for the classification task.
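The labeling step behind such a selector can be sketched as follows: each image is labeled with the method that attains the lowest error, and those labels become the training targets for the classifier. The method names and error values below are illustrative placeholders, not results from the paper.

```python
# Hypothetical per-image errors for three blind-IQA methods
# (names and numbers are illustrative only).
errors = {
    "img_01": {"BRISQUE": 0.12, "NIQE": 0.30, "PIQE": 0.25},
    "img_02": {"BRISQUE": 0.40, "NIQE": 0.10, "PIQE": 0.22},
    "img_03": {"BRISQUE": 0.18, "NIQE": 0.21, "PIQE": 0.09},
}

# Label each image with the best-performing method; a CNN selector
# is then trained to predict these labels from the image alone.
labels = {img: min(errs, key=errs.get) for img, errs in errors.items()}
print(labels)  # {'img_01': 'BRISQUE', 'img_02': 'NIQE', 'img_03': 'PIQE'}
```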
Research Guide: Data Augmentation for Deep Learning
AutoAugment is an augmentation strategy that employs a search algorithm to find the augmentation policy yielding the best results for a given model. Each policy consists of several sub-policies, and one sub-policy is chosen at random for each image. Each sub-policy in turn consists of image processing functions, such as translation, shearing, or rotation, together with the probability with which each function is applied.
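The sampling scheme can be sketched as follows. Each sub-policy is a list of (operation, probability, magnitude) triples; one sub-policy is picked per image, and each of its operations fires with its stated probability. The operation names and magnitudes here are illustrative placeholders, not a searched policy from the paper.

```python
import random

# A toy AutoAugment-style policy (illustrative values).
policy = [
    [("rotate", 0.7, 15), ("shear_x", 0.3, 0.2)],
    [("translate_y", 0.6, 10), ("rotate", 0.4, 30)],
]

def apply_policy(image, policy, rng=random):
    # Randomly pick one sub-policy for this image, then apply each
    # of its operations with the stated probability.
    sub_policy = rng.choice(policy)
    applied = []
    for op, prob, magnitude in sub_policy:
        if rng.random() < prob:
            applied.append((op, magnitude))  # a real version transforms `image`
    return applied

random.seed(1)
print(apply_policy("example.png", policy))
```

The search algorithm's job is to choose the operations, probabilities, and magnitudes that maximize validation accuracy; this sketch only shows how a found policy is applied.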
Research Guides for Machine and Deep Learning
It's nearly impossible to keep up with all the latest amazing research that's happening all around the globe. From architecture optimization to task-based research and beyond, there are so many incredible efforts being undertaken to push the ML landscape into new, exciting frontiers. And while we can't possibly cover every new development, we have a number of excellent Heartbeat articles that review, summarize, and otherwise explore current research trends. This list should provide a good starting point for diving into some of the core ML research out there.
Research Guide: Advanced Loss Functions for Machine Learning Models
Logistic loss functions don't perform well during training when the data is very noisy; such noise can be caused by outliers and mislabeled examples. In this paper, Google Brain authors address these shortcomings of the logistic loss by replacing the logarithm and exponential functions with their corresponding "tempered" versions. The authors introduce a temperature into the exponential function and replace the softmax output layer of neural nets with a high-temperature generalization, while the logarithm used in the log loss is replaced by a low-temperature logarithm.
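The tempered pair can be written down in a few lines. Both functions reduce to the standard log and exp as the temperature t approaches 1, and they are inverses of each other; the specific temperature value below is just an example.

```python
import math

def log_t(x, t):
    """Tempered logarithm: reduces to log(x) as t -> 1."""
    if t == 1.0:
        return math.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential, the inverse of log_t; reduces to exp(x) as t -> 1."""
    if t == 1.0:
        return math.exp(x)
    return max(0.0, 1.0 + (1.0 - t) * x) ** (1.0 / (1.0 - t))

# With t > 1, exp_t has a heavier-than-exponential tail, which is what
# makes the resulting loss more robust to outliers and label noise.
print(exp_t(log_t(2.0, 1.5), 1.5))  # ≈ 2.0 (inverse pair)
```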
Research Guide for Depth Estimation with Deep Learning
This paper proposes a fully convolutional architecture for estimating the depth map of a scene from a single RGB image. The ambiguous mapping between monocular images and depth maps is modeled via residual learning, and the reverse Huber loss is used for optimization. The model runs in real time on images and videos.
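The reverse Huber (berHu) loss mentioned above is easy to state: it behaves like L1 for small residuals and like a scaled L2 for large ones, so large depth errors receive a stronger gradient. The residual values and threshold below are illustrative; in practice the threshold c is often set per batch from the maximum absolute residual.

```python
def berhu(residual, c):
    """Reverse Huber (berHu) loss: L1 for |residual| <= c,
    scaled L2 beyond it (continuous at |residual| == c)."""
    a = abs(residual)
    if a <= c:
        return a
    return (a * a + c * c) / (2.0 * c)

# Per-pixel depth errors (illustrative values).
residuals = [0.05, 0.2, 1.5]
c = 0.3
print([round(berhu(r, c), 4) for r in residuals])  # [0.05, 0.2, 3.9]
```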